Surprisingly Popular Voting for Concentric Rank-Order Models
Hosseini, Hadi, Mandal, Debmalya, Puhan, Amrit
An important problem on social information sites is the recovery of ground truth from individual reports when the experts are in the minority. The wisdom of the crowd, i.e., the collective opinion of a group of individuals, fails in such a scenario. However, the surprisingly popular (SP) algorithm~\cite{prelec2017solution} can recover the ground truth even when the experts are in the minority, by asking individuals to submit additional prediction reports, i.e., their beliefs about the reports of others. Several recent works have extended the surprisingly popular algorithm to an equivalent voting rule (SP-voting) to recover the ground-truth ranking over a set of $m$ alternatives. However, we do not yet fully understand when SP-voting can recover the ground-truth ranking, and if so, how many samples (votes and predictions) it needs. We answer this question by proposing two rank-order models and analyzing the sample complexity of SP-voting under these models. In particular, we propose concentric mixtures of Mallows and Plackett-Luce models with $G (\ge 2)$ groups. Our models generalize previously proposed concentric mixtures of Mallows models with $2$ groups, and we highlight the importance of $G > 2$ groups by identifying three distinct groups (expert, intermediate, and non-expert) in existing datasets. Next, we provide conditions on the parameters of the underlying models under which SP-voting can recover the ground-truth ranking with high probability, and we derive the corresponding sample complexities. We complement the theoretical results by evaluating SP-voting on simulated and real datasets.
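For a single binary question, the SP decision rule of Prelec et al. can be sketched in a few lines: pick the answer whose actual vote share exceeds the crowd's average predicted vote share for it. The sketch below uses a toy crowd of ten voters (the data are hypothetical, not from the paper):

```python
def surprisingly_popular(votes, predictions):
    """Sketch of the surprisingly popular rule for a binary question.
    `votes` is a list of answers in {'A', 'B'}; `predictions[i]` is
    voter i's predicted fraction of the crowd answering 'A'. The rule
    returns the answer that is more popular than predicted."""
    actual_a = votes.count('A') / len(votes)
    predicted_a = sum(predictions) / len(predictions)
    return 'A' if actual_a > predicted_a else 'B'

# Toy scenario: only 40% vote 'A' (the expert minority), but voters on
# average predict just 25% will say 'A' -- so 'A' is surprisingly popular.
votes = ['A'] * 4 + ['B'] * 6
predictions = [0.25] * 10
answer = surprisingly_popular(votes, predictions)
```

Note how the rule can return the minority answer 'A' here, which is exactly the regime where simple majority voting fails.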
Neural Contrast: Leveraging Generative Editing for Graphic Design Recommendations
Lupascu, Marian, Mironica, Ionut, Stupariu, Mihai-Sorin
Creating visually appealing composites requires optimizing both text and background for compatibility. Previous methods have focused on simple design strategies, such as changing text color or adding background shapes for contrast. These approaches are often destructive, altering text color or partially obstructing the background image. Another method involves placing design elements in non-salient and contrasting regions, but this isn't always effective, especially with patterned backgrounds. To address these challenges, we propose a generative approach using a diffusion model. This method ensures the altered regions beneath design assets exhibit low saliency while enhancing contrast, thereby improving the visibility of the design asset.
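One concrete way to quantify the text-background compatibility the abstract refers to is a contrast metric; the sketch below uses the WCAG 2.1 contrast ratio as an illustrative stand-in (the paper's own saliency and contrast measures are not specified here):

```python
def relative_luminance(rgb):
    """WCAG 2.1 relative luminance for an sRGB color with channels in [0, 1]."""
    def lin(c):
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (lin(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio between a foreground and a background color."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black text on a white background: the maximum possible ratio, 21:1.
ratio = contrast_ratio((0.0, 0.0, 0.0), (1.0, 1.0, 1.0))
```

A generative editor of the kind described would push the background region under a text asset toward colors that raise this ratio while keeping the region visually unobtrusive.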
Learning Recourse Costs from Pairwise Feature Comparisons
Rawal, Kaivalya, Lakkaraju, Himabindu
This paper presents a novel technique for incorporating user input when learning and inferring user preferences. When trying to provide users of black-box machine learning models with actionable recourse, we often wish to incorporate their personal preferences about the ease of modifying each individual feature. These recourse-finding algorithms usually require an exhaustive set of tuples associating each feature to its cost of modification. Since it is hard to obtain such costs by directly surveying humans, in this paper, we propose the use of the Bradley-Terry model to automatically infer feature-wise costs.

In high-stakes decision settings such as credit scoring, bail applications, or hiring decisions, applicants often seek recourse to correct unfavourable predicted outcomes for the future. In these scenarios, since there can be multiple possible recourses for each individual, feasibility considerations, user preferences, and heuristics to minimize the size of the proposed modifications are used to guide the search for appropriate recourses (Poyiadzi et al., 2020; Pawelczyk et al., 2020; Joshi et al., 2019). Recourse search algorithms thus return the best possible recourse based on these considerations by performing a search over the feature space of the model.
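The Bradley-Terry model mentioned above can be fit from pairwise comparisons with the classic minorization-maximization update (Hunter, 2004). The sketch below is illustrative, not the authors' exact procedure, and the comparison data are hypothetical:

```python
import numpy as np

def bradley_terry(n_items, comparisons, iters=500):
    """Fit Bradley-Terry scores from (winner, loser) pairs via the MM
    update, returning one positive score per item, normalised to sum
    to 1. A pair (i, j) means item i was judged 'costlier' than j."""
    wins = np.zeros(n_items)
    games = np.zeros((n_items, n_items))
    for i, j in comparisons:
        wins[i] += 1
        games[i, j] += 1
        games[j, i] += 1
    # Light smoothing (one virtual win in each direction per pair) so
    # items that never win still receive a positive score.
    wins += n_items - 1
    games += 2.0
    np.fill_diagonal(games, 0.0)
    p = np.ones(n_items)
    for _ in range(iters):
        p = wins / (games / (p[:, None] + p[None, :])).sum(axis=1)
        p /= p.sum()
    return p

# Hypothetical survey: feature 0 is judged harder to modify than 1,
# and 1 harder than 2, so the scores should be ordered 0 > 1 > 2.
scores = bradley_terry(3, [(0, 1), (0, 1), (0, 2), (1, 2), (1, 2)])
```

The appeal of this approach is that a non-exhaustive set of simple pairwise judgments suffices, rather than asking people to state an absolute cost for every feature.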
Probabilistic modeling of discrete structural response with application to composite plate penetration models
Bhaduri, Anindya, Meyer, Christopher S., Gillespie, John W. Jr., Haque, Bazle Z., Shields, Michael D., Graham-Brady, Lori
Discrete response of structures is often a key probabilistic quantity of interest. For example, one may need to identify the probability of a binary event, such as whether a structure has buckled or not. In this study, an adaptive domain-based decomposition and classification method, combined with sparse grid sampling, is used to develop an efficient classification surrogate modeling algorithm for such discrete outputs. An assumption of monotonic behaviour of the output with respect to all model parameters, based on the physics of the problem, helps to reduce the number of model evaluations and makes the algorithm more efficient. As an application problem, this paper deals with the development of a computational framework for generating the probabilistic penetration response of S-2 glass/SC-15 epoxy composite plates under ballistic impact. This enables the computationally feasible generation of the probabilistic velocity response (PVR) curve, or the $V_0-V_{100}$ curve, as a function of the impact velocity, and the prediction of the ballistic limit velocity as a function of the model parameters. The PVR curve incorporates the variability of the model input parameters and describes the probability of penetration of the plate as a function of impact velocity.
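The value of the monotonicity assumption is easy to see in one dimension: if penetration at velocity $v$ implies penetration at every higher velocity, the penetration threshold can be bracketed by bisection in logarithmically many model evaluations. A minimal sketch with a hypothetical toy simulator (not the paper's finite-element model):

```python
def find_limit_velocity(penetrates, lo, hi, tol=1.0):
    """Bisection exploiting monotonicity of the binary output:
    `penetrates(v)` returns True/False for impact velocity v (m/s),
    with no penetration at `lo` and penetration at `hi`. Each model
    evaluation halves the bracket around the threshold."""
    assert not penetrates(lo) and penetrates(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if penetrates(mid):
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Toy monotone model with a true penetration threshold of 431.7 m/s.
limit = find_limit_velocity(lambda v: v > 431.7, lo=300.0, hi=600.0)
```

The adaptive domain-decomposition scheme in the paper plays an analogous role in higher dimensions, refining the classification boundary only where the discrete output switches.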
Bringing Industry 4.0 to You - ProcessMiner
For most of us, the formative years of our lives were shaped by the books we read. A generation acquired its knowledge leafing through dry books with stenciled alphabets. From learning our ABCs to Shakespeare's sonnets, the Industrial Revolution ensured that its role in shaping world history, through its machinery, chemicals, steam, and more, was kept alive and documented via its own production of printing machines. The Industrial Revolution paved the way for the life we know today, and that life has far surpassed the era of simplistic conveyor belts and heavy manual surveillance. Production lines now employ machinery and humans alike.
Guided Dropout
Keshari, Rohit, Singh, Richa, Vatsa, Mayank
Dropout is often used in deep neural networks to prevent over-fitting. Conventionally, dropout training invokes a \textit{random drop} of nodes from the hidden layers of a neural network. It is our hypothesis that a guided selection of nodes for intelligent dropout can lead to better generalization compared to traditional dropout. In this research, we propose "guided dropout" for training deep neural networks, which drops nodes by measuring the strength of each node. We also demonstrate that conventional dropout is a specific case of the proposed guided dropout. Experimental evaluation on multiple datasets, including MNIST, CIFAR10, CIFAR100, SVHN, and Tiny ImageNet, demonstrates the efficacy of the proposed guided dropout.
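The core idea, dropping nodes by a measured strength score rather than uniformly at random, can be sketched as a masking function. The strength measure below (mean absolute activation over a batch) is an illustrative stand-in, not the paper's exact definition:

```python
import numpy as np

def guided_dropout_mask(activations, drop_rate=0.5):
    """Illustrative sketch of strength-guided dropout: rank hidden
    nodes by a per-node strength score (here, mean absolute activation
    over the batch) and drop the weakest fraction, instead of dropping
    nodes uniformly at random as in conventional dropout. Uses
    inverted-dropout scaling so expected activations are preserved."""
    strength = np.abs(activations).mean(axis=0)   # one score per node
    n_drop = int(drop_rate * strength.size)
    drop_idx = np.argsort(strength)[:n_drop]      # indices of weakest nodes
    mask = np.ones(strength.size)
    mask[drop_idx] = 0.0
    return mask / (1.0 - drop_rate)

rng = np.random.default_rng(0)
batch = rng.normal(size=(32, 8))                  # 32 samples, 8 hidden nodes
mask = guided_dropout_mask(batch, drop_rate=0.5)
dropped = batch * mask                            # apply mask during training
```

Setting the selection back to a uniformly random subset recovers conventional dropout, which is the sense in which random dropout is a special case of the guided variant.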
Adaptive Polling for Information Aggregation
Pfeiffer, Thomas (Harvard University) | Gao, Xi Alice (Harvard University) | Chen, Yiling (Harvard University) | Mao, Andrew (Harvard University) | Rand, David G. (Harvard University)
The flourishing of online labor markets such as Amazon Mechanical Turk (MTurk) makes it easy to recruit many workers for solving small tasks. We study whether information elicitation and aggregation over a combinatorial space can be achieved by integrating small pieces of potentially imprecise information, gathered from a large number of workers through simple, one-shot interactions in an online labor market. We consider the setting of predicting the ranking of $n$ competing candidates, each having a hidden underlying strength parameter. At each step, our method estimates the strength parameters from the collected pairwise comparison data and adaptively chooses another pairwise comparison question for the next recruited worker. Through an MTurk experiment, we show that the adaptive method effectively elicits and aggregates information, outperforming a naive method using a random pairwise comparison question at each step.
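The adaptive step, choosing which pairwise question to ask next given current strength estimates, can be sketched with a simple heuristic: ask about the pair whose estimated strengths are closest, i.e. the comparison whose outcome is currently most uncertain. This heuristic is illustrative and not necessarily the authors' exact selection criterion:

```python
import itertools

def next_question(strengths, asked_counts):
    """Pick the next pairwise comparison to pose to a worker: among all
    candidate pairs, choose the one with the smallest strength gap,
    breaking ties toward pairs that have been asked less often.
    `asked_counts` maps (i, j) pairs to how many times each was asked."""
    pairs = itertools.combinations(range(len(strengths)), 2)
    return min(pairs, key=lambda p: (abs(strengths[p[0]] - strengths[p[1]]),
                                     asked_counts.get(p, 0)))

# Hypothetical strength estimates for 4 candidates: items 1 and 2 are
# nearly tied, so comparing them is the most informative question.
pair = next_question([0.9, 0.35, 0.4, 0.1], asked_counts={})
```

In the full loop, the strength estimates would be refit from the accumulated comparison data after each answer before selecting the next question.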
Structure Learning in Human Causal Induction
Tenenbaum, Joshua B., Griffiths, Thomas L.
We use graphical models to explore the question of how people learn simple causal relationships from data. The two leading psychological theories can both be seen as estimating the parameters of a fixed graph. We argue that a complete account of causal induction should also consider how people learn the underlying causal graph structure, and we propose to model this inductive process as a Bayesian inference. Our argument is supported through the discussion of three data sets.

1 Introduction

Causality plays a central role in human mental life. Our behavior depends upon our understanding of the causal structure of our environment, and we are remarkably good at inferring causation from mere observation. Constructing formal models of causal induction is currently a major focus of attention in computer science [7], psychology [3,6], and philosophy [5]. This paper attempts to connect these literatures, by framing the debate between two major psychological theories in the computational language of graphical models. We show that existing theories equate human causal induction with maximum likelihood parameter estimation on a fixed graphical structure, and we argue that to fully account for human behavioral data, we must also postulate that people make Bayesian inferences about the underlying causal graph structure itself.
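Bayesian inference over graph structure, as opposed to parameter estimation on a fixed graph, can be illustrated on the simplest case of two candidate structures: a graph with a cause-to-effect edge versus one with no edge. The sketch below compares their marginal likelihoods on hypothetical contingency data, using uniform Beta(1,1) priors on the Bernoulli parameters (the counts and priors are illustrative, not the paper's):

```python
from math import lgamma, exp

def log_marglik_bernoulli(k, n):
    """Log marginal likelihood of k successes in n Bernoulli trials
    under a uniform Beta(1,1) prior on the success probability:
    integral of p^k (1-p)^(n-k) dp = B(k+1, n-k+1)."""
    return lgamma(k + 1) + lgamma(n - k + 1) - lgamma(n + 2)

# Hypothetical contingency data: effect counts with and without the cause.
e_given_c, n_c = 8, 10      # effect present in 8 of 10 trials with the cause
e_given_nc, n_nc = 1, 10    # effect present in 1 of 10 trials without it

# Graph 1 (cause -> effect): a separate effect probability per condition.
log_m1 = (log_marglik_bernoulli(e_given_c, n_c)
          + log_marglik_bernoulli(e_given_nc, n_nc))
# Graph 0 (no edge): one shared effect probability across conditions.
log_m0 = log_marglik_bernoulli(e_given_c + e_given_nc, n_c + n_nc)

# Posterior probability of the causal graph under a 50/50 structure prior.
post_causal = 1.0 / (1.0 + exp(log_m0 - log_m1))
```

Parameter estimation on a fixed graph would only ask how strong the edge is; the structure-level inference above additionally asks whether the edge exists at all, which is the distinction the paper draws.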